
    Bayesian Hierarchical Modelling for Tailoring Metric Thresholds

    Software is highly contextual. While there are cross-cutting 'global' lessons, individual software projects exhibit many 'local' properties. This data heterogeneity makes drawing local conclusions from global data dangerous. A key research challenge is to construct locally accurate prediction models that are informed by global characteristics and data volumes. Previous work has tackled this problem using clustering and transfer-learning approaches, which identify locally similar characteristics. This paper applies a simpler approach known as Bayesian hierarchical modeling. We show that hierarchical modeling supports cross-project comparisons while preserving local context. To demonstrate the approach, we conduct a conceptual replication of an existing study on setting software metric thresholds. Our emerging results show that our hierarchical model reduces prediction error by up to 50% compared to a global approach. Comment: Short paper, published at MSR '18: 15th International Conference on Mining Software Repositories, May 28-29, 2018, Gothenburg, Sweden.
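
    The core idea is partial pooling: per-project parameters are drawn from a shared global distribution, so data-poor projects borrow strength from the whole dataset while data-rich projects keep their local estimates. A minimal sketch in PyMC follows; the variable names, priors, and toy data are illustrative assumptions, not the authors' exact model.

        import numpy as np
        import pymc as pm

        # Toy data: one metric (x) and an outcome (y) observed across projects.
        # Entirely synthetic; it stands in for the paper's real project data.
        rng = np.random.default_rng(0)
        n_projects, n_per = 5, 40
        project = np.repeat(np.arange(n_projects), n_per)
        x = rng.normal(size=n_projects * n_per)
        y = rng.normal(0.5, 0.3, n_projects)[project] * x + rng.normal(0, 0.5, x.size)

        with pm.Model():
            # Global hyperpriors: the cross-project distribution of slopes.
            mu_b = pm.Normal("mu_b", 0.0, 1.0)
            sigma_b = pm.HalfNormal("sigma_b", 1.0)
            # Local, per-project slopes, partially pooled toward the global mean.
            b = pm.Normal("b", mu=mu_b, sigma=sigma_b, shape=n_projects)
            sigma = pm.HalfNormal("sigma", 1.0)
            pm.Normal("obs", mu=b[project] * x, sigma=sigma, observed=y)
            idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)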

    Improving Multi-Objective Test Case Selection by Injecting Diversity in Genetic Algorithms

    A way to reduce the cost of regression testing consists of selecting or prioritizing subsets of test cases from a test suite according to some criteria. Besides greedy algorithms, cost-cognizant additional greedy algorithms, multi-objective optimization algorithms, and Multi-Objective Genetic Algorithms (MOGAs) have also been proposed to tackle this problem. However, previous studies have shown that there is no clear winner between greedy algorithms and MOGAs, and that their combination does not necessarily produce better results. In this paper, we show that the optimality of MOGAs can be significantly improved by diversifying the solutions (subsets of the test suite) generated during the search process. Specifically, we introduce a new MOGA, named DIV-GA (DIversity-based Genetic Algorithm), based on the mechanisms of orthogonal design and orthogonal evolution, which increase diversity by injecting new orthogonal individuals during the search process. The results of an empirical study conducted on eleven programs show that DIV-GA outperforms both greedy algorithms and traditional MOGAs from the optimality point of view. Moreover, the solutions (subsets of the test suite) provided by DIV-GA detect more faults than those of the other algorithms, while keeping the same test execution cost.
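
    The distinctive step is injecting fresh, mutually distant individuals during evolution rather than relying on crossover and mutation alone. The sketch below is a simplified stand-in for DIV-GA's orthogonal-design injection (here, a max-min Hamming-distance heuristic over test-suite bit vectors); the function name and candidate-sampling strategy are illustrative assumptions, not the paper's algorithm.

        import random

        def inject_diverse(population, n_new, genome_len, seed=0):
            """Pick n_new test-suite subsets (bit vectors) that are far, in
            Hamming distance, from everything already in the population:
            a simplified stand-in for orthogonal-design injection."""
            rng = random.Random(seed)

            def min_dist(cand):
                return min(sum(a != b for a, b in zip(cand, ind)) for ind in population)

            newcomers = []
            for _ in range(n_new):
                cands = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(20)]
                best = max(cands, key=min_dist)
                newcomers.append(best)
                population = population + [best]  # keep newcomers mutually distant too
            return newcomers

        pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]
        print(inject_diverse(pop, n_new=2, genome_len=10))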

    Continuous integration and delivery practices for cyber-physical systems: an interview-based study

    Continuous Integration and Delivery (CI/CD) practices have shown several benefits for software development and operations, e.g., faster release cycles and early discovery of defects. For Cyber-Physical System (CPS) development, CI/CD can help achieve required goals, such as high dependability, yet it may be challenging to apply. This paper empirically investigates the challenges and barriers occurring when applying CI/CD practices to develop CPSs in 10 organizations working in 8 different domains, and how they are mitigated. The study has been conducted through semi-structured interviews, by applying an open card-sorting procedure together with a member-checking survey within the same organizations, and by validating the results through a further survey involving 55 professional developers. The study reveals several peculiarities in the application of CI/CD to CPSs. These include the need for (i) combining continuous and periodic builds, while balancing the use of Hardware-in-the-Loop (HiL) and simulators; (ii) coping with difficulties in software deployment; (iii) accounting for simulators and HiL differing in their behavior; and (iv) combining hardware/software expertise in the development team. Our findings open the road towards recommenders aimed at supporting the setup and evolution of CI/CD pipelines, as well as university curricula requiring interdisciplinarity, such as knowledge about hardware, software, and their interplay.
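
    Finding (i), combining continuous simulator-based builds with periodic Hardware-in-the-Loop (HiL) runs, can be pictured with a small scheduling sketch; the stage names and trigger model below are invented for illustration and are not from the paper.

        def plan_stages(trigger: str) -> list[str]:
            """Route fast, simulator-based tests to every commit and reserve
            scarce HiL rigs for a periodic (e.g., nightly) pipeline."""
            stages = ["build", "unit-tests", "simulator-tests"]  # continuous path
            if trigger == "nightly":
                # Periodic path: deploy to hardware and check sim/HiL agreement.
                stages += ["deploy-to-hil", "hil-tests", "compare-sim-vs-hil"]
            return stages

        print(plan_stages("push"))     # fast feedback on each commit
        print(plan_stages("nightly"))  # periodic hardware validation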

    The place of vericiguat in the landscape of treatment for heart failure with reduced ejection fraction

    The significant morbidity and mortality associated with heart failure with reduced (HFrEF) or preserved (HFpEF) ejection fraction justify the search for novel therapeutic agents. The nitric oxide (NO)–soluble guanylate cyclase (sGC)–cyclic guanosine monophosphate (cGMP) pathway plays an important role in the regulation of cardiovascular function. This pathway is disrupted in HF, resulting in decreased protection against myocardial injury. The sGC activator cinaciguat increases cGMP levels by direct, NO-independent activation of sGC and may be particularly effective in conditions of increased oxidative stress and endothelial dysfunction, and thus reduced NO levels, but this comes at the expense of a greater risk of hypotension. Conversely, sGC stimulators (riociguat and vericiguat) enhance sGC sensitivity to endogenous NO and thus exert a more physiological action. The phase 3 VICTORIA trial found that vericiguat is safe and effective in patients with HFrEF and recent HF decompensation. Therefore, adding vericiguat may be considered in individual patients with HFrEF, particularly those at higher risk of HF hospitalization; the efficacy of the sacubitril/valsartan-vericiguat combination in HFrEF is currently unknown.

    Analysis of Snow Cover in the Sibillini Mountains in Central Italy

    Research on solid precipitation and snow cover, especially in mountainous areas, suffers from a lack of on-site observations and from the low reliability of measurements, often due to instruments that are not suited to the environmental conditions. The study area, the Monti Sibillini National Park, is no exception: it is a mountainous area located in central Italy, where measurements are scarce and fragmented. The purpose of this research is to characterize the snow cover of the Monti Sibillini National Park area in terms of maximum annual snow depth, average snow depth during the snowy period, and days with snow cover on the ground, by means of ground weather stations, and to analyze any trends over the last 30 years. To obtain reliable snow cover data, only data from weather stations equipped with a sonar system and from manual weather stations, where a surveyor visits the site each morning, checks the thickness of the snowpack, and records it, were collected. The data were collected from 1 November to 30 April each year for 30 years, from 1991 to 2020; six weather stations were taken into account, and four more were added as of 1 January 2010. The longer period was used to assess possible ongoing trends, which proved very heterogeneous: predominantly negative for days with snow cover on the ground, predominantly positive for maximum annual snow depth, and mixed for average annual snow depth. The shorter period, 2010–2022, on the other hand, ensured the presence of a larger number of weather stations and was used to assess the correlation and the presence of clusters among the weather stations and, consequently, in the study area. In this way, an up-to-date nivometric classification of the study area was obtained (in terms of days with snow on the ground, maximum snowpack height, and average snowpack height), filling a gap, as no nivometric study of the area existed before. The interpolations were processed using geostatistical techniques such as co-kriging with altitude as an independent variable, allowing fairly precise spatialization, as assessed through cross-validation. This analysis could be a useful tool for hydrological modeling of the area, and it has clear uses related to tourism and to vegetation, whose phenology is strongly influenced by the nivometric variables. It could also serve as a starting point for calibrating more recent satellite products dedicated to snow cover detection, further improving the compiled climate characterization.
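
    The interpolation step can be approximated in a few lines. The sketch below uses a Gaussian process with altitude included as a third input feature, a rough stand-in for the paper's co-kriging with altitude as an auxiliary variable (true co-kriging models cross-covariances between fields and would need a dedicated geostatistics package); all station values are invented.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Invented records: (easting km, northing km, altitude m) -> mean snow depth (cm).
        X = np.array([[0, 0, 800], [5, 2, 1200], [3, 8, 1500],
                      [9, 4, 1000], [7, 9, 1800], [1, 6, 950]], dtype=float)
        y = np.array([12.0, 35.0, 60.0, 22.0, 85.0, 18.0])

        # Anisotropic kernel: horizontal distance and altitude act on different scales.
        kernel = 1.0 * RBF(length_scale=[5.0, 5.0, 400.0]) + WhiteKernel(1.0)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

        point = np.array([[4.0, 5.0, 1300.0]])  # an unsampled grid cell
        mean, std = gp.predict(point, return_std=True)
        print(f"predicted depth: {mean[0]:.1f} +/- {std[0]:.1f} cm")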

    Indications of beta-adrenoceptor blockers in Takotsubo syndrome and theoretical reasons to prefer agents with vasodilating activity

    Takotsubo syndrome (TTS) is estimated to account for 1–3% of all patients presenting with suspected ST-segment elevation myocardial infarction. A sudden surge in sympathetic nervous system activity is considered the cause of TTS. Nonetheless, no specific recommendations have been provided regarding β-blocking therapy. Apart from specific contraindications (severe left ventricular (LV) dysfunction, hypotension, bradycardia, and corrected QT interval >500 ms), treatment with a β-blocker seems reasonable until full recovery of LV ejection fraction, though the evidence is limited to a few animal studies, case reports, and observational studies. In this review, we reappraise the rationale for β-blocker therapy in TTS and speculate on the pathophysiologic basis for preferring non-selective agents with vasodilating activity over β1-selective drugs.

    Recovering fitness gradients for interprocedural Boolean flags in search-based testing

    National Research Foundation (NRF) Singapore under Corp Lab @ University scheme; National Research Foundation (NRF) Singapore under its NSoE Programme

    An NLP-based tool for software artifacts analysis

    Software developers rely on various repositories and communication channels to exchange relevant information about their ongoing tasks and the status of overall project progress. In this context, semi-structured and unstructured software artifacts have been leveraged by researchers to build recommender systems aimed at supporting developers in different tasks, such as transforming user feedback into maintenance and evolution tasks, suggesting experts, or generating software documentation. More specifically, Natural Language (NL) parsing techniques have been successfully leveraged to automatically identify (or extract) the relevant information embedded in unstructured software artifacts. However, such techniques require the manual identification of the patterns to be used for classification purposes. To reduce this manual effort, we propose an NL parsing-based tool for software artifacts analysis named NEON, which can automate the mining of such rules, minimizing the manual effort of developers and researchers. Through a small study involving human subjects with NL processing and parsing expertise, we assess the performance of NEON in identifying rules useful for classifying app reviews for software maintenance purposes. Our results show that more than one-third of the rules inferred by NEON are relevant to the proposed task. Demo webpage: https://github.com/adisorbo/NEON tool
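
    The kind of parse-based rule such a tool mines can be pictured with a shallow dependency-parsing sketch; the (lemma, dependency-role) pattern format below is an illustrative assumption, not NEON's actual rule syntax.

        import spacy

        nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

        def parse_pattern(sentence: str) -> list[tuple[str, str]]:
            """Map a sentence to a (lemma, dependency-role) sequence: the raw
            material from which classification rules could be generalized."""
            return [(t.lemma_, t.dep_) for t in nlp(sentence) if not t.is_punct]

        # App-review sentences a maintenance-oriented rule might match:
        print(parse_pattern("The app crashes when I open the camera"))
        print(parse_pattern("Please fix the login button"))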

    Automated identification and qualitative characterization of safety concerns reported in UAV software platforms

    Unmanned Aerial Vehicles (UAVs) are nowadays used in a variety of applications. Given the cyber-physical nature of UAVs, software defects in these systems can cause issues with safety-critical implications. An important aspect of the lifecycle of UAV software is to minimize the possibility of harming humans or damaging property through a continuous process of hazard identification and safety risk management. Specifically, safety-related concerns typically emerge during the operation of UAV systems and are reported by end-users and developers in the form of issue reports and pull requests. However, popular UAV systems receive tens or hundreds of reports of varying types and quality every day. To help developers promptly identify and triage safety-critical UAV issues, we (i) experiment with automated approaches (previously used for issue classification) for detecting the safety-related matters appearing in the titles and descriptions of issues and pull requests reported in UAV platforms, and (ii) propose a categorization of the main hazards and accidents discussed in such issues. Our results (i) show that shallow machine-learning-based approaches can identify safety-related sentences with precision, recall, and F-measure values of about 80%; and (ii) provide a categorization and description of the relationships between safety-issue hazards and accidents.
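
    A shallow machine-learning baseline of the kind the paper evaluates can be set up in a few lines with scikit-learn; the sentences, labels, and pipeline choices below are invented for illustration and are not the paper's dataset or exact configuration.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        sentences = [
            "Drone lost altitude and crashed into a tree during RTL",
            "Failsafe did not trigger and the vehicle flew over a crowd",
            "Update the README with the new build instructions",
            "Refactor the parameter parsing code for clarity",
        ]
        labels = [1, 1, 0, 0]  # 1 = safety-related sentence, 0 = not

        # TF-IDF features plus a linear SVM: a typical "shallow" text classifier.
        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(sentences, labels)
        print(clf.predict(["Propeller detached mid-flight near bystanders"]))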

    LIPS vs MOSA: a Replicated Empirical Study on Automated Test Case Generation

    Replication is a fundamental pillar in the construction of scientific knowledge. Test data generation for procedural programs can be tackled using a single-target or a many-objective approach. The proponents of LIPS, a novel single-target test generator, conducted a preliminary empirical study to compare their approach with MOSA, an alternative many-objective test generator. However, their empirical investigation suffers from several external and internal validity threats, does not consider complex programs with many branches, and does not include any qualitative analysis to interpret the results. In this paper, we report the results of a replication of the original study designed to address its major limitations and threats to validity. The new findings draw a completely different picture of the pros and cons of single-target vs. many-objective approaches to test case generation.
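
    Both kinds of generators steer the search with branch distances: single-target (LIPS-style) search minimizes one distance at a time, while many-objective (MOSA-style) search treats every uncovered branch's distance as one coordinate of a fitness vector. The sketch below uses the standard textbook branch-distance formulation; the constant k and the normalization are common conventions, not necessarily those of either tool.

        def branch_distance(lhs: float, rhs: float, op: str, k: float = 1.0) -> float:
            """0 when the branch predicate is satisfied; grows with how far
            the inputs are from satisfying it. Normalized into [0, 1)."""
            if op == "==":
                d = abs(lhs - rhs)
            elif op == "<":
                d = max(lhs - rhs + k, 0.0)
            elif op == ">":
                d = max(rhs - lhs + k, 0.0)
            else:
                raise ValueError(f"unsupported operator: {op}")
            return d / (d + 1.0)

        # Many-objective view: one distance per uncovered branch.
        fitness_vector = [branch_distance(3, 10, "=="), branch_distance(2, 7, ">")]
        print(fitness_vector)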